Training a perceptron in a discrete weight space
Abstract
Learning in a perceptron having a discrete weight space, where each weight can take 2L+1 different values, is examined analytically and numerically. The learning algorithm is based on the training of the continuous perceptron and prediction following the clipped weights. The learning is described by a new set of order parameters, composed of the overlaps between the teacher and the continuous/c...
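A minimal sketch of the scheme described in the abstract, assuming a teacher-student setup: the continuous student is trained with a simple perceptron rule on the teacher's examples, while predictions follow the weights clipped to the 2L+1 allowed values. The teacher, input distribution, learning rate, and update rule here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, steps, lr = 100, 3, 3000, 0.05

teacher = rng.integers(-L, L + 1, size=N).astype(float)   # discrete teacher (assumed)
w = rng.normal(size=N)                                     # continuous student weights

def clip_weights(w, L):
    """Map continuous weights onto the 2L+1 allowed values -L, ..., 0, ..., +L."""
    return np.clip(np.round(w), -L, L)

# training: update the continuous weights with the perceptron rule on mistakes
for _ in range(steps):
    x = rng.choice([-1.0, 1.0], size=N)
    label = np.sign(teacher @ x)
    if np.sign(w @ x) != label:
        w += lr * label * x

# prediction follows the clipped weights: estimate the clipped student's error
w_c = clip_weights(w, L)
X_test = rng.choice([-1.0, 1.0], size=(2000, N))
err = np.mean(np.sign(X_test @ w_c) != np.sign(X_test @ teacher))
overlap = (teacher @ w_c) / (np.linalg.norm(teacher) * np.linalg.norm(w_c))
print(f"teacher/clipped-student overlap: {overlap:.3f}, test error: {err:.3f}")
```

The overlap between the teacher and the clipped student is one ingredient of the order parameters mentioned in the abstract.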
Similar Resources

Sequential Competitive Facility Location Problem in a Discrete Planar Space
In this paper, there are two competitors in a planar market. The first competitor, called the leader, opens new facilities. After that, the second competitor, the follower, reacts to the leader's action and opens r new facilities. The leader and the follower already have some facilities in this market. The optimal locations for the leader and the follower are chosen among predefined candida...
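A brute-force sketch of the leader/follower interaction under simple assumptions: both players pick from the same predefined candidate sites, each customer patronizes the nearest open facility, ties favour the leader, and pre-existing facilities are ignored. The demand points, candidate sites, and capture rule are illustrative, not the paper's model.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
customers = rng.uniform(0, 10, size=(30, 2))           # demand points (assumed unit weights)
candidates = [(2, 2), (2, 8), (5, 5), (8, 2), (8, 8)]  # predefined candidate sites
p, r = 2, 1                                            # leader opens p sites, follower opens r

def captured_by_leader(leader_sites, follower_sites):
    """Demand captured by the leader when each customer visits the nearest facility."""
    leader_d = np.min(np.linalg.norm(customers[:, None] - np.array(leader_sites), axis=2), axis=1)
    follow_d = np.min(np.linalg.norm(customers[:, None] - np.array(follower_sites), axis=2), axis=1)
    return int(np.sum(leader_d <= follow_d))           # ties go to the leader (assumption)

best = None
for leader in combinations(candidates, p):
    # the follower reacts optimally to the leader's choice (best response)
    remaining = [c for c in candidates if c not in leader]
    follower = min(combinations(remaining, r),
                   key=lambda f: captured_by_leader(leader, f))
    score = captured_by_leader(leader, follower)
    if best is None or score > best[0]:
        best = (score, leader, follower)

print("leader sites:", best[1], "follower reply:", best[2], "leader demand:", best[0])
```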
Optimal Weight Decay in a Perceptron
Weight decay was proposed to reduce overfitting as it often appears in the learning tasks of artificial neural networks. In this paper weight decay is applied to a well defined model system based on a single layer perceptron, which exhibits strong overfitting. Since the optimal non-overfitting solution is known for this system, we can compare the effect of the weight decay with this solution. A stra...
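A brief sketch of weight decay on a single-layer (linear) perceptron: gradient descent on the training error plus an L2 penalty, evaluated on held-out data for two values of the decay constant. The teacher-student data model, sizes, and lambda values are assumptions for illustration, not the paper's model system.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, lr, epochs = 20, 15, 0.05, 500            # fewer examples than weights -> overfitting

teacher = rng.normal(size=N)
X_train = rng.normal(size=(P, N))
y_train = X_train @ teacher + 0.5 * rng.normal(size=P)   # noisy training targets
X_test = rng.normal(size=(200, N))
y_test = X_test @ teacher

def train(lam):
    """Gradient descent on the quadratic error plus a weight-decay term (lam/2)*||w||^2."""
    w = np.zeros(N)
    for _ in range(epochs):
        grad = X_train.T @ (X_train @ w - y_train) / P + lam * w
        w -= lr * grad
    return w

for lam in (0.0, 0.5):
    w = train(lam)
    test_err = np.mean((X_test @ w - y_test) ** 2)
    print(f"lambda={lam}: test error = {test_err:.3f}")
```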
Multilayer Perceptron Training
In this contribution we present an algorithm for using possibly inaccurate knowledge of model derivatives as a part of the training data for a multilayer perceptron network (MLP). In many practical process control problems there are many well-known rules about the effect of control variables on the target variables. With the presented algorithm the basically data-driven neural network model can ...
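A toy sketch of the general idea, assuming the rule "the output increases with the first input" is known: the training loss combines a data term with a penalty on violations of that derivative sign, estimated by finite differences, and the whole loss is minimized with a crude numerical gradient. The network size, penalty weight, and rule are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
H, lam, lr, steps, eps = 5, 1.0, 0.1, 300, 1e-5

X = rng.uniform(-1, 1, size=(30, 2))
y = X[:, 0] + 0.3 * np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=30)   # toy process data

def unpack(p):
    W1 = p[:2 * H].reshape(H, 2); b1 = p[2 * H:3 * H]
    W2 = p[3 * H:4 * H]; b2 = p[4 * H]
    return W1, b1, W2, b2

def mlp(p, X):
    W1, b1, W2, b2 = unpack(p)
    return np.tanh(X @ W1.T + b1) @ W2 + b2

def loss(p):
    data = np.mean((mlp(p, X) - y) ** 2)
    # derivative rule: the output should increase with input 0 (finite-difference slope)
    dX = X.copy(); dX[:, 0] += 1e-3
    slope = (mlp(p, dX) - mlp(p, X)) / 1e-3
    rule = np.mean(np.maximum(0.0, -slope) ** 2)        # penalize negative slopes only
    return data + lam * rule

p = 0.1 * rng.normal(size=4 * H + 1)
for _ in range(steps):
    grad = np.zeros_like(p)
    for i in range(p.size):                             # crude numerical gradient
        dp = np.zeros_like(p); dp[i] = eps
        grad[i] = (loss(p + dp) - loss(p - dp)) / (2 * eps)
    p -= lr * grad

print("final loss:", round(float(loss(p)), 4))
```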
Solution Space of Perceptron
The figures above show that if the solution space exists only in the S = 1 space and the initial weights are in the S = −1 space, the training path needs to pass through the origin to change space. This means w3 changes from a positive value to a negative value. Comparing the learning behaviors of two different initial weights, Wa = [1, −2.5, 2] and Wb = 0.5Wa = [0.5, −1.25, 1], we find that the bigger absolute value of w3 caus...
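A hedged sketch of the comparison mentioned in the excerpt: the same perceptron learning rule is run from Wa = [1, −2.5, 2] and Wb = 0.5Wa on an assumed toy dataset whose separating solution requires a negative w3, and w3 is tracked to watch the sign change. The dataset, teacher vector, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
target = np.array([1.0, -2.5, -2.0])          # assumed teacher: its solution needs w3 < 0
X = rng.normal(size=(50, 3))
y = np.sign(X @ target)

def train(w0, lr=0.2, epochs=30):
    """Standard perceptron rule; returns the trajectory of w3 over epochs."""
    w = w0.astype(float).copy()
    traj = [w[2]]
    for _ in range(epochs):
        for x, label in zip(X, y):
            if np.sign(w @ x) != label:
                w += lr * label * x
        traj.append(w[2])
    return traj

Wa = np.array([1.0, -2.5, 2.0])
Wb = 0.5 * Wa
for name, w0 in (("Wa", Wa), ("Wb", Wb)):
    traj = train(w0)
    flip = next((i for i, v in enumerate(traj) if v < 0), None)
    print(f"{name}: w3 starts at {traj[0]:+.2f}, first epoch with w3 < 0: {flip}")
```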
Journal
Journal title: Physical Review E
Year: 2001
ISSN: 1063-651X, 1095-3787
DOI: 10.1103/physreve.64.046109